There’s no doubt that generative artificial intelligence (AI) has empowered bad actors to create sophisticated scams. In 2023, the FTC reported that consumers lost $2.7 billion to imposter scams, the most commonly reported fraud category for the year.
It takes just three seconds of audio to clone a person’s voice, giving scammers an easy avenue to launch a broad range of scams and disinformation attacks. Seventy-three percent of Americans are concerned about AI-generated deepfake robocalls that mimic the voice of a loved one to try to scam them out of money.
In an effort to protect Americans, the FCC ruled earlier this year that AI-generated robocalls made without the consent of the called party are illegal. The move is designed to combat malicious voice cloning by allowing the agency to fine companies and block these calls. While this is a tangible step, Americans must remain vigilant because bad actors don’t stop – they adapt. In this blog, we discuss five ways consumers can protect their voices.
1. Switch to automated voicemail messages.
If you are like most consumers, you may have set up a customized voicemail greeting that callers hear when trying to reach you. However, these greetings are long enough for bad actors to record and feed into voice-cloning platforms.
Luckily, there is a simple fix: replace your recorded greeting with the automated message provided by your wireless service provider. On iPhone, open the Phone app, tap the Voicemail icon, select Greeting in the top-left corner and choose Default. On Android, open the Phone app, tap the voicemail icon or dial your voicemail number, follow the prompts to access voicemail settings and select the option to reset or revert to the default greeting.
2. Create a family safe word.
One of the most common imposter scams targeting American consumers is the ‘imposter family member scam’. Often aimed at older family members, it involves bad actors using cloned voices to mimic a loved one and make it sound as if they are in peril and need immediate financial assistance.
To avoid falling victim to these scams, consider creating a safe word that only your family knows and using it in emergencies or when you suspect voice-cloning activity.
3. Limit social media recordings.
People should always be mindful of what they post on social media channels, but with the rise of AI voice cloning scams, they should be especially careful. Phrases like “help me” make it extremely easy for bad actors to capture your voice and make a cloned version sound as if you are in danger.
If you actively post video content on social media channels, including a designated safe word throughout your videos is a subtle way for friends and family to tell whether a post is really from you or whether a bad actor has hacked your account and posted AI-generated content featuring you.
4. Avoid voice biometric verification.
Accounts that use voice biometric verification are becoming an increasingly common target. Any time you record new speech samples to log into an account, those samples are often saved to your phone, making them easy for bad actors to capture and manipulate.
Facial recognition, while not without its own limitations, provides an alternative mechanism for protecting sensitive information.
5. Do not speak first to unknown numbers.
If you answer a call from an unknown number, wait for the person or voice on the other end of the line to speak first. As mentioned earlier, bad actors need only a few seconds to record your voice, so the less you say, the better.
As AI-driven scams become increasingly sophisticated, protecting your voice from bad actors is more crucial than ever. By implementing these five measures, you may reduce the risk of falling victim to AI-generated scams.
Greg Bohl is Chief Data Officer at TNS with specific responsibility for TNS’ Communications Market solutions.